
Appendix

Neural Information Processing Systems

We have shown experimentally that our method is effective in a variety of domains; however, other problem domains may require additional hyperparameter tuning, which can be expensive.


Universality of Gaussian-Mixture Reverse Kernels in Conditional Diffusion

Ishtiaque, Nafiz, Haque, Syed Arefinul, Alam, Kazi Ashraful, Jahara, Fatima

arXiv.org Machine Learning

We prove that conditional diffusion models whose reverse kernels are finite Gaussian mixtures with ReLU-network logits can approximate suitably regular target distributions arbitrarily well in context-averaged conditional KL divergence, up to an irreducible terminal mismatch that typically vanishes with increasing diffusion horizon. A path-space decomposition reduces the output error to this mismatch plus per-step reverse-kernel errors; assuming each reverse kernel factors through a finite-dimensional feature map, each step becomes a static conditional density approximation problem, solved by composing Norets' Gaussian-mixture theory with quantitative ReLU bounds. Under exact terminal matching the resulting neural reverse-kernel class is dense in conditional KL.
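The reverse-kernel class the abstract describes — finite Gaussian mixtures whose mixing weights are produced by a ReLU network applied to the context — can be illustrated with a toy one-dimensional member of that class. This is a hedged sketch, not the paper's construction: the parameter shapes, the single hidden layer, and the fixed component means and scales are all illustrative choices.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def gm_conditional_density(x, c, params):
    """Toy member of the function class in the abstract: mixture logits
    come from a one-hidden-layer ReLU network applied to the context c;
    each component is a fixed 1-D Gaussian."""
    W1, b1, W2, b2, means, log_scales = params
    logits = W2 @ relu(W1 @ c + b1) + b2
    w = np.exp(logits - logits.max())
    w /= w.sum()                        # softmax mixture weights
    s = np.exp(log_scales)
    comps = np.exp(-0.5 * ((x - means) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return float(w @ comps)

rng = np.random.default_rng(1)
K, H, D = 4, 8, 3                      # components, hidden units, context dim
params = (rng.standard_normal((H, D)), rng.standard_normal(H),
          rng.standard_normal((K, H)), rng.standard_normal(K),
          rng.standard_normal(K), rng.standard_normal(K) * 0.1)
c = rng.standard_normal(D)

# The result is a valid conditional density: nonnegative, unit mass.
xs = np.linspace(-10.0, 10.0, 4001)
density = np.array([gm_conditional_density(x, c, params) for x in xs])
mass = float(density.sum() * (xs[1] - xs[0]))
```

The theorem's per-step reduction treats each such conditional density as a static approximation target, which is why the mixture-plus-ReLU parameterization above is the natural unit of analysis.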


Acceleration through Optimistic No-Regret Dynamics

Jun-Kun Wang, Jacob D. Abernethy

Neural Information Processing Systems

Zero-sum games can be solved using online learning dynamics: a classical technique simulates two no-regret algorithms that play against each other, and after T rounds the average iterate is guaranteed to solve the original optimization problem with error decaying as O(log T / T). In this paper we show that the technique can be enhanced to a rate of O(1/T^2) by extending recent work [22, 25] that leverages optimistic learning to speed up equilibrium computation.
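The classical (non-optimistic) baseline the abstract starts from can be sketched with two multiplicative-weights learners in self-play; the game, step size, and horizon below are illustrative, and the paper's O(1/T^2) optimistic variant is not implemented here.

```python
import numpy as np

# Rock-paper-scissors payoff for the row player; the game is zero-sum.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def no_regret_selfplay(A, T=5000, eta=0.05):
    """Two multiplicative-weights learners play A against each other;
    the time-averaged strategies approximate a Nash equilibrium."""
    n, m = A.shape
    x = np.array([0.6, 0.3, 0.1])      # deliberately non-uniform starts
    y = np.array([0.2, 0.5, 0.3])
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for _ in range(T):
        x_avg += x / T
        y_avg += y / T
        gx, gy = A @ y, x @ A          # payoffs each player observes
        x = x * np.exp(eta * gx);  x /= x.sum()
        y = y * np.exp(-eta * gy); y /= y.sum()
    return x_avg, y_avg

x_bar, y_bar = no_regret_selfplay(A)
# Duality gap of the averaged iterates: zero exactly at equilibrium,
# and bounded by the sum of both players' average regrets.
gap = float((A @ y_bar).max() - (x_bar @ A).min())
```

The gap bound is exactly where the O(log T / T) rate comes from: each player's regret is O(log T) for multiplicative weights, and averaging divides it by T.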



Gradient Descent Meets Shift-and-Invert Preconditioning for Eigenvector Computation

Zhiqiang Xu

Neural Information Processing Systems

Shift-and-invert preconditioning, as a classic acceleration technique for the leading eigenvector computation, has received much attention again recently, owing to fast least-squares solvers for efficiently approximating matrix inversions in power iterations.
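The acceleration mechanism can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's algorithm: it uses an exact linear solve where the paper approximates the matrix inversion with fast least-squares solvers, and the matrix, shift, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = (M + M.T) / 2                      # symmetric test matrix

def shift_invert_power(A, sigma, iters=50):
    """Power iteration applied to (sigma*I - A)^(-1): the eigenvalue of A
    closest to the shift sigma becomes the dominant one of the inverse,
    so convergence is governed by the local spectral gap rather than the
    raw ratio of A's top two eigenvalues."""
    n = A.shape[0]
    B = sigma * np.eye(n) - A
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = np.linalg.solve(B, v)      # the step the paper replaces with
        v /= np.linalg.norm(v)         # an approximate least-squares solver
    return v

lam_top = float(np.linalg.eigvalsh(A).max())
v = shift_invert_power(A, sigma=lam_top + 0.1)
rayleigh = float(v @ A @ v)            # converges to the top eigenvalue
```

Choosing sigma slightly above the leading eigenvalue maps it to the large value 1/(sigma − λ₁), which is why each power step makes much more progress than plain power iteration on A.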


cell

Neural Information Processing Systems

However, the analysis of such data poses challenges due to the high levels of noise, sparsity, and data scale encountered.